Time series are measured and analysed across the sciences. One way to quantify the structure of time series is by computing a set of summary statistics, or "features", and then representing a time series in terms of its properties as a feature vector. The resulting feature space is interpretable and informative, and enables conventional statistical learning approaches, including clustering, regression, and classification, to be applied to time-series datasets. Many open-source software packages exist for computing sets of time-series features, across multiple programming languages, including catch22 (22 features: Matlab, R, Python, Julia), feasts (42 features: R), tsfeatures (63 features: R), Kats (40 features: Python), tsfresh (779 features: Python), and TSFEL (390 features: Python). However, there are several issues: (i) a single point of access to these packages is not currently available; (ii) to access all feature sets, users must be fluent in multiple languages; and (iii) these feature-extraction packages lack extensive accompanying methodological pipelines for performing feature-based time-series analysis, such as applications to time-series classification. Here we introduce a solution to these issues in an R software package called theft: Tools for Handling Extraction of Features from Time series. theft is a unified and extendable framework for computing features from the six open-source time-series feature sets listed above. It also includes a suite of functions for processing and interpreting the performance of extracted features, including extensive data-visualization templates, low-dimensional projections, and time-series classification operations. With the growing volume and complexity of time-series datasets in science and industry, theft provides a standardized framework for comprehensively quantifying and interpreting informative structure in time series.
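The feature-vector representation described above can be sketched as follows. This is an illustrative example only: the features computed here (mean, spread, lag-1 autocorrelation) are stand-ins, not the actual feature definitions of theft or any of the packages it wraps.

```python
import numpy as np

def extract_features(ts):
    """Map a univariate time series to a small vector of summary
    statistics ("features"). These four features are illustrative
    stand-ins for the feature sets named in the abstract."""
    ts = np.asarray(ts, dtype=float)
    diffs = np.diff(ts)
    # Lag-1 autocorrelation captures short-range temporal structure.
    ac1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]
    return np.array([ts.mean(), ts.std(), diffs.std(), ac1])

# Represent a dataset of time series as a feature matrix, to which
# standard learning methods (clustering, classification) can be applied.
rng = np.random.default_rng(0)
noise = rng.normal(size=200)             # unstructured series
trend = np.cumsum(rng.normal(size=200))  # strongly autocorrelated series
X = np.vstack([extract_features(noise), extract_features(trend)])
print(X.shape)  # one 4-dimensional feature vector per series
```

Each row of `X` is now a point in an interpretable feature space; the random walk's high lag-1 autocorrelation separates it cleanly from the white noise series.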
Rapid advancements in the collection and dissemination of multi-platform molecular and genomics data have resulted in enormous opportunities to aggregate such data in order to understand, prevent, and treat human diseases. While significant improvements have been made in multi-omic data integration methods to discover biological markers and mechanisms underlying both prognosis and treatment, the precise cellular functions governing these complex mechanisms still need detailed and data-driven de-novo evaluations. We propose a framework called Functional Integrative Bayesian Analysis of High-dimensional Multiplatform Genomic Data (fiBAG), that allows simultaneous identification of upstream functional evidence of proteogenomic biomarkers and the incorporation of such knowledge in Bayesian variable selection models to improve signal detection. fiBAG employs a conflation of Gaussian process models to quantify (possibly non-linear) functional evidence via Bayes factors, which are then mapped to a novel calibrated spike-and-slab prior, thus guiding selection and providing functional relevance to the associations with patient outcomes. Using simulations, we illustrate how integrative methods with functional calibration have higher power to detect disease related markers than non-integrative approaches. We demonstrate the profitability of fiBAG via a pan-cancer analysis of 14 cancer types to identify and assess the cellular mechanisms of proteogenomic markers associated with cancer stemness and patient survival.
There is significant interest in deploying machine learning algorithms for diagnostic radiology, as modern learning techniques have made it possible to detect abnormalities in medical images within minutes. While machine-assisted diagnoses cannot yet reliably replace human reviews of images by a radiologist, they could inform prioritization rules for determining the order by which to review patient cases so that patients with time-sensitive conditions could benefit from early intervention. We study this scenario by formulating it as a learning-augmented online scheduling problem. We are given information about each arriving patient's urgency level in advance, but these predictions are inevitably error-prone. In this formulation, we face the challenges of decision making under imperfect information, and of responding dynamically to prediction error as we observe better data in real-time. We propose a simple online policy and show that this policy is in fact the best possible in certain stylized settings. We also demonstrate that our policy achieves the two desiderata of online algorithms with predictions: consistency (performance improvement with prediction accuracy) and robustness (protection against the worst case). We complement our theoretical findings with empirical evaluations of the policy under settings that more accurately reflect clinical scenarios in the real world.
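The prediction-informed prioritization idea above can be illustrated with a minimal greedy policy: review cases in order of their (error-prone) predicted urgency. This is a sketch of the general setup only, not the policy analysed in the paper, and the names (`Case`, `schedule`) are ours.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Case:
    # Lower value pops first; we negate urgency so that higher
    # predicted urgency yields higher review priority.
    priority: float
    patient_id: str = field(compare=False)

def schedule(cases):
    """Review cases in decreasing order of predicted urgency.
    A toy greedy rule illustrating prediction-informed
    prioritization, not the paper's online policy."""
    heap = [Case(-urgency, pid) for pid, urgency in cases]
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).patient_id)
    return order

print(schedule([("A", 0.2), ("B", 0.9), ("C", 0.5)]))  # ['B', 'C', 'A']
```

A consistency/robustness analysis would then ask how badly this ordering degrades when the urgency predictions are wrong, and how the policy should hedge dynamically as true condition information arrives.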
This paper describes Waymo's Collision Avoidance Testing (CAT) methodology: a scenario-based testing method that evaluates the safety of the Waymo Driver Automated Driving Systems' (ADS) intended functionality in conflict situations initiated by other road users that require urgent evasive maneuvers. Because SAE Level 4 ADS are responsible for the dynamic driving task (DDT), when engaged, without immediate human intervention, evaluating a Level 4 ADS using scenario-based testing is difficult due to the potentially infinite number of operational scenarios in which hazardous situations may unfold. To that end, in this paper we first describe the safety test objectives for the CAT methodology, including the collision and serious injury metrics and the reference behavior model representing a non-impaired, eyes-on-conflict human driver used to form an acceptance criterion. Afterward, we introduce the process for identifying potentially hazardous situations from a combination of human data, ADS testing data, and expert knowledge about the product design and associated Operational Design Domain (ODD). The test allocation and execution strategy is presented next, which exclusively utilizes simulations constructed from sensor data collected on a test track, real-world driving, or from simulated sensor data. The paper concludes with the presentation of results from applying CAT to the fully autonomous ride-hailing service that Waymo operates in San Francisco, California and Phoenix, Arizona. The iterative nature of scenario identification, combined with over ten years of experience of on-road testing, results in a scenario database that converges to a representative set of responder role scenarios for a given ODD. Using Waymo's virtual test platform, which is calibrated to data collected as part of many years of ADS development, the CAT methodology provides a robust and scalable safety evaluation.
With an increasing number of crowdsourced private automatic weather stations (called TPAWS) established to fill the gaps in official networks and obtain local weather information for various purposes, data quality is a major concern in promoting their usage. Proper quality control and assessment are necessary to reach mutual agreement on the TPAWS observations. To derive near real-time assessments for an operational system, we propose a simple, scalable and interpretable framework based on AI/Stats/ML models. The framework constructs separate models for individual data from official sources and then provides the final assessment by fusing the individual models. The performance of our proposed framework is evaluated on synthetic data and demonstrated by applying it to a real TPAWS network.
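The fuse-per-source-models structure can be sketched as below. This is a minimal illustration under our own assumptions: each "individual model" here is just an agreement score against one official reference series, mapped to [0, 1], and fusion is a weighted mean; the paper's actual models and fusion rule may differ.

```python
import numpy as np

def assess_against_source(tpaws_obs, official_obs):
    """Per-source quality score in [0, 1]: agreement of a TPAWS series
    with one official reference series, via an absolute z-score of the
    residuals. Illustrative stand-in for the paper's AI/Stats/ML models."""
    resid = tpaws_obs - official_obs
    z = np.abs((resid - resid.mean()) / (resid.std() + 1e-9))
    return np.clip(1.0 - z / 3.0, 0.0, 1.0)

def fuse(scores, weights=None):
    """Fuse per-source scores into a final assessment (weighted mean)."""
    return np.average(np.vstack(scores), axis=0, weights=weights)

# Two official sources and one well-behaved private station (synthetic).
rng = np.random.default_rng(1)
official_a = rng.normal(20.0, 1.0, size=50)
official_b = official_a + rng.normal(0.0, 0.3, size=50)
station = official_a + rng.normal(0.0, 0.2, size=50)
final = fuse([assess_against_source(station, official_a),
              assess_against_source(station, official_b)])
print(final.shape)  # one fused quality score per observation
```

Keeping the per-source models separate, and fusing only at the end, is what makes the assessment interpretable: a low fused score can be traced back to the particular official source the station disagrees with.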
A growing ecosystem of large, open-source foundation models has reduced the labeled data and technical expertise necessary to apply machine learning to many new problems. Yet foundation models pose a clear dual-use risk, indiscriminately reducing the costs of building both harmful and beneficial machine learning systems. To mitigate this risk, we propose the task blocking paradigm, in which foundation models are trained with an additional mechanism to impede adaptation to harmful tasks while retaining good performance on desired tasks. We call the resulting models self-destructing models, inspired by mechanisms that prevent adversaries from using tools for harmful purposes. We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning, showing that it can largely prevent a BERT-based model from learning to perform gender identification without harming the model's ability to perform profession classification. We conclude with a discussion of future directions.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Existing metrics for evaluating the quality of automatically generated questions such as BLEU, ROUGE, BERTScore, and BLEURT compare the reference and predicted questions, providing a high score when there is a considerable lexical overlap or semantic similarity between the candidate and the reference questions. This approach has two major shortcomings. First, we need expensive human-provided reference questions. Second, it penalises valid questions that may not have high lexical or semantic similarity to the reference questions. In this paper, we propose a new metric, RQUGE, based on the answerability of the candidate question given the context. The metric consists of a question-answering and a span scorer module, in which we use pre-trained models from the existing literature, and therefore, our metric can be used without further training. We show that RQUGE has a higher correlation with human judgment without relying on the reference question. RQUGE is shown to be significantly more robust to several adversarial corruptions. Additionally, we illustrate that we can significantly improve the performance of QA models on out-of-domain datasets by fine-tuning on the synthetic data generated by a question generation model and re-ranked by RQUGE.
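The answerability idea behind RQUGE can be sketched with toy components: a stub "QA model" and a token-overlap F1 span scorer standing in for the pre-trained QA and span-scorer modules the metric actually uses. Everything below (`answerability_score`, `lookup_qa`) is our own illustrative naming, not RQUGE's implementation.

```python
def token_f1(pred, gold):
    """Token-level F1 between a predicted answer span and the gold
    answer, a common span-scoring choice in QA evaluation."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def answerability_score(context, question, gold_answer, qa_model):
    """Score a candidate question by how well a QA model can answer it
    from the context -- a toy stand-in for RQUGE, which uses pre-trained
    modules rather than the stubs here."""
    predicted = qa_model(context, question)
    return token_f1(predicted, gold_answer)

# A trivial "QA model" stub for illustration only.
def lookup_qa(context, question):
    return "1889" if "when" in question.lower() else ""

ctx = "The Eiffel Tower was completed in 1889."
print(answerability_score(ctx, "When was the Eiffel Tower completed?",
                          "1889", lookup_qa))  # 1.0
```

Note that the candidate question is never compared against a reference question: a validly rephrased question still scores highly as long as the QA module can recover the gold answer from the context, which is exactly the shortcoming of lexical-overlap metrics that RQUGE addresses.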
3D flash LiDAR is an alternative to conventional scanning LiDAR systems, promising precise depth imaging in a compact form factor and free of moving parts, for applications such as self-driving cars, robotics, and augmented reality (AR). Typically implemented using single-photon, direct time-of-flight (dToF) receivers in image sensor format, the operation of such devices can be hindered by the large number of photon events that must be processed and compressed in outdoor scenes, as well as by limited scalability to larger arrays. We present here a 64x32 pixel (256x128 SPAD) dToF imager that overcomes these limitations by using pixels with embedded histogramming, which lock onto and track the return signal. This greatly reduces the size of output data frames, enabling direct depth readings at frame rates in the 10 kFPS range, or a maximum frame rate of 100 kFPS. The sensor offers selective readout of pixels that detect surfaces or sense motion, reducing power consumption and off-chip processing requirements. We demonstrate the application of the sensor in mid-range LiDAR.
We present GAAF, a Generalised Automatic Anatomy Finder, for identifying generic anatomical locations in 3D CT scans. GAAF is an end-to-end pipeline with dedicated modules for data pre-processing, model training and inference. At its core, GAAF uses a custom convolutional neural network (CNN). The CNN model is small and lightweight, and can be adapted to suit particular applications. So far, the GAAF framework has been tested in the head and neck, and is able to locate anatomical landmarks such as the centre of mass of the brainstem. GAAF was evaluated on an open-access dataset, achieving accurate and robust localisation performance. All of our code is open source and available at https://github.com/rrr-uom-projects/gaaf.